Supporting Software Reuse in Concurrent Object-Oriented Languages: Exploring the Language Design Space

Authors

  • Michael Papathomas
  • Oscar Nierstrasz
Abstract

[Figure: conditional acceptance of requests. Activation conditions may be representation independent, expressed in terms of abstract properties of the object without reference to the particular implementation (ACT++, Rosette, PROCOL, path expressions), or representation specific, expressed directly on the hidden object state (GUIDE, Hybrid, SINA).]

2. Request Scheduling Transparency: ... the client are not acceptable from the point of view of reusability, since the client then cannot be written in a generic fashion.

3. Internal Concurrency: The concurrency constructs should allow for the implementation of objects that service several requests in parallel or are internally parallel for increased execution speed. This should be supported in a way that does not affect the clients, so that sequential implementations of objects may be replaced by parallel ones. By the same token, it should also be easy to coordinate the execution of already existing objects.

4. Reply Scheduling Transparency: A client should not be forced to wait until the serving object replies. In the meantime it may itself accept further requests or call other objects in parallel. It is also useful to specify that replies are to be sent to a proxy. (This is related to the problem of coping with remote delays [20], though we additionally consider that the client may initiate multiple requests in parallel.) Reply scheduling by the client should not require the cooperation of the server, since this would limit the ability to combine independently developed clients and servers.

5. Compositionality and Incremental Modification: Existing object classes should be reusable within new contexts without modification. Additionally, mechanisms for incremental modification of classes, such as inheritance, must be designed with special consideration given to concurrency, to allow existing code to cooperate gracefully with modifications and extensions [17], [32].

In order to compare the design choices and their combinations with respect to the reuse requirements, we shall refer to an instance of a "generic" concurrent program structure: the administrator, inspired by [11]. The administrator is an object that uses a collection of "worker" objects to service requests. An administrator application consists of four main kinds of components. The clients issue requests to the administrator and get back results. The administrator accepts requests from multiple concurrent clients and decomposes them into a number of subrequests. The workload manager maintains the status of workers and pending requests. Workers handle the subrequests and reply to the administrator. The administrator collects the intermediate replies and computes the final results to be returned to the clients.

The administrator is a very general framework for structuring concurrent applications. For example, workers may be very specialized resources or they may be general-purpose compute servers. The workload manager may seek to maximize parallelism by load balancing, or it may allocate jobs to workers based on their individual capabilities.
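To make the structure concrete, the following is a minimal sketch of the four kinds of components as plain Java interfaces. The names, signatures, and value types are ours, chosen only for illustration; the paper does not prescribe any particular interfaces.

    // Illustrative component interfaces for an administrator application.
    // All names and types here are assumptions, not taken from the paper.

    record Request(String payload) {}
    record SubRequest(String payload) {}
    record SubResult(String payload) {}
    record Result(String payload) {}

    interface Client {
        void run();                              // issues requests to the administrator and consumes results
    }

    interface Administrator {
        Result handle(Request request);          // accepts a request, decomposes it into subrequests,
                                                 // dispatches them to workers, and composes the final result
    }

    interface WorkloadManager {
        Worker allocate(SubRequest subRequest);  // maintains the status of workers and pending requests
        void terminated(Worker worker);          // notified when a worker finishes a subrequest
    }

    interface Worker {
        SubResult doWork(SubRequest subRequest); // handles one subrequest and replies to the administrator
    }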
[Figure: clients, administrator, workload manager, and workers.]

The following aspects of the example are specifically related to reuse:

• Mutual Exclusion: (i) workload manager reuse – the workload manager must be protected from concurrent requests by the administrator (and its proxies); (ii) worker reuse – workers should similarly be protected, as arbitrary objects may be used as workers;

• Request Scheduling Transparency: (iii) client reuse – the administrator must be able to interleave (or delay) multiple client requests, but the client should not be required to take special action if the serving object happens to be implemented as an administrator;

• Internal Concurrency: (iv) client/worker reuse – the administrator should be open to concurrent implementation (possibly using proxies) without constraining the interface of either clients or workers;

• Reply Scheduling Transparency: (v) worker reuse – it must be possible for the administrator to issue requests to workers concurrently without special action by workers;

• Compositionality: (vi) administrator reuse – the administrator should be programmed in such a way that it can be reused by substituting only the part responsible for decomposing requests and composing replies (parameterization, inheritance, or other techniques may be appropriate).

There are other aspects of language design, apart from the concurrency features, that affect the creation and use of reusable administrator applications, such as support for generic or parameterized classes and support for dynamic binding. In our discussion, however, we concentrate on issues that are specific to the concurrency features of languages and ignore those issues that would also arise in sequential languages.

4. Exploring the Language Design Space

In order to explore and evaluate the COOPL design choices we have selected, we shall consider in turn (i) object models, (ii) client/server interaction and internal concurrency, and (iii) administrator reusability. Throughout, we shall refer to the reusability requirements of §3 and make use of the administrator example to illustrate specific points.

4.1 Concurrent Object Models

By the requirement of mutual exclusion, we can immediately discount the orthogonal object model, as it provides no default protection for objects in the presence of concurrent requests. The reusability of workers and workload managers is clearly enhanced if they function correctly independently of any assumptions about sequential access.

The heterogeneous model is similarly defective, since one must explicitly distinguish between active and passive objects. A generic administrator would be less reusable if it had to distinguish between active and passive workers. Similarly, worker reusability is weakened if there can be different kinds of workers.

The homogeneous object model is the most reasonable choice with respect to reusability: no distinction is made between active and passive objects. Note that it is not clear whether the performance gains one might expect of a heterogeneous model are realizable, since they depend on the programmer's (static) assignment of objects to active or passive classes. With a homogeneous approach, the compiler could conceivably make such decisions based on local considerations – whether a component is shared by other concurrently executing objects is application specific and should be independent of the object type.
4.2 Client/Server Interaction

As reusability is clearly enhanced when objects obey a standard protocol, we shall suppose that objects generally conform to a request/reply interface. If we provide an object model with communication primitives supporting only sequential RPC, we quickly discover that this is not enough to satisfy the requirements of the administrator. In particular, a sequential RPC administrator will not be able to interleave multiple clients' requests, as it will be forced to reply to a client before it can accept another request. The only "solution" under this assumption requires the cooperation of the client. For example, the administrator returns the name of a "request handler" proxy to the client, which the client must call to obtain the result (the request handler may make use of further "courier" proxies to call the workers). In this way the administrator is immediately free to accept new requests after returning the name of the request handler.

We must either relax the sequentiality constraint or the strict RPC protocol. The possibilities are: (i) one-way message passing, (ii) explicit request/reply scheduling primitives (with or without proxies), and (iii) internal concurrency. Let us consider each of these from the administrator's viewpoint.

4.2.1 Administrator Request Scheduling

One-Way Message Passing

An extreme solution in the direction of relaxing RPC is to support one-way synchronous or asynchronous message passing. In this case the administrator is free to accept messages and reply to them in whatever order it likes. One-way message passing has, however, some disadvantages. A concurrent client may issue several requests to the administrator before it gets a reply; in this case it is important for the client to know which reply corresponds to which request. Are replies returned in the same order as requests? In the case of synchronous message passing an additional difficulty is that the administrator may be blocked when it sends the reply until the client is willing to accept it. Requiring the client to accept the reply imposes additional requirements on the client and makes reuse more difficult: either a different mechanism has to be supported for sending replies, or proxies have to be created.

Explicit Request/Reply Scheduling

It is also possible to relax the RPC style of communication without going all the way to supporting one-way message passing as the main communication primitive. This has the advantage that it is possible to present an RPC interface to clients and, at the same time, obtain more flexibility for processing requests by the administrator. This possibility is illustrated by ABCL/1 [37], which permits the pairing of an RPC interface on the client side with one-way asynchronous message passing on the administrator's side. Moreover, the reply message does not have to be sent by the administrator object. This provides even more flexibility in the way the administrator may handle requests. The following segment of code shows how this is accomplished. The RPC call on the client side looks like:

    result := [ administrator <== :someRequest arg1 ... argn ]

A message is sent to the administrator to execute the request someRequest with arguments arg1, ..., argn. The client is blocked until the reply to the request is returned, and the result is stored in the client's local variable result. On the administrator's side the client's request is accepted by matching the message pattern:

    (=> :someRequest arg1 ... argn @ whereToReply
        ... actions executed in response to this request ...
    )

When the administrator accepts this request, the arguments are made available in the local variables arg1, ..., argn and the reply destination of the request in the local variable whereToReply. The reply destination may be used as the target of a "past type" asynchronous message for returning the reply to the client. As a reply destination may also be passed around in messages, it is possible for another object to send the reply message to the client. This action would look like:

    [ whereToReply <== result ]

where whereToReply is a local variable containing the reply destination, obtained by the message acceptance statement shown above, and result is the result of the client's request.
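The same decoupling can be approximated in a mainstream threaded language. The sketch below is our own rough analogue, not ABCL/1: the client sees an ordinary blocking call, while the administrator captures a reply destination (here a CompletableFuture) that it, or any other object holding it, may complete later. All class and method names are illustrative assumptions.

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ConcurrentLinkedQueue;

    // Rough analogue of pairing an RPC interface on the client side with
    // explicit reply scheduling on the administrator's side.
    class AdministratorSketch {

        // Each pending request carries its own reply destination.
        record Pending(String request, CompletableFuture<String> replyDestination) {}

        private final ConcurrentLinkedQueue<Pending> pending = new ConcurrentLinkedQueue<>();

        // Client-side view: an ordinary blocking (RPC-style) call.
        public String someRequest(String arg) throws Exception {
            CompletableFuture<String> whereToReply = new CompletableFuture<>();
            pending.add(new Pending(arg, whereToReply));
            return whereToReply.get();   // blocks until someone completes the reply
        }

        // Administrator-side view: accept a request now, reply whenever convenient.
        // The reply destination could equally be handed to another object.
        public void serveOne() {
            Pending p = pending.poll();
            if (p != null) {
                p.replyDestination().complete("result for " + p.request());
            }
        }
    }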
Internal Concurrency

Another way of allowing the administrator to process several concurrent requests is to support multiple concurrent or quasi-concurrent threads. In Hybrid, for example, the administrator may issue requests to workers through delegated calls, thus temporarily suspending the calling thread and freeing the administrator to accept new requests from clients. In such a case a thread is used for handling each request. RPC-like communication can be used with clients, since new threads can be created for processing requests even if the replies to other requests have not yet been returned.

4.2.2 Administrator Reply Scheduling

There are three main ways for the administrator to invoke the workers in parallel: (i) one-way message passing, (ii) proxies, and (iii) internal concurrency.

One-Way Message Passing

A difficulty with using one-way messages is getting the replies from workers. As there are several workers invoked in parallel, and potentially concurrent invocations of a single worker, it is difficult for the administrator to tell which reply is associated with which request. One solution to this problem is, for each request, to create a proxy which carries out the request. The proxy may then send a message to the administrator containing the worker's reply plus some extra information used to identify the request. This solution has the advantage that the administrator will be triggered by the reply when it is available. Another solution is for the administrator to call this object at some later time to obtain the result. In this case there can be a problem if the administrator blocks to obtain a result that is not yet ready and thus ignores new client requests. This problem may be solved by introducing a non-blocking primitive for accepting requests, but this reduces to polling for the arrival of requests and replies. Polling could be avoided by guarded command constructs in which the guards can express both the acceptance of requests and the arrival of replies.

Proxies

The administrator can call multiple workers in parallel by creating a number of "courier" proxies responsible for calling the workers and collecting the replies. The couriers may then be passed to another object responsible for composing the final result. Future variables [37] and CBox [38] mechanisms provide functionality which is somewhat similar to courier objects. Future variables, however, are not first-class objects and so are not as flexible, since they cannot be sent in messages to other objects.
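As a rough illustration, a courier can be approximated by a task submitted to an executor, with the resulting Future standing in for the place where the worker's reply will eventually appear. This sketch is ours and its names are assumptions; note that, unlike ABCL/1 future variables, these Future objects are first-class values and could be handed to a separate object responsible for composing the result.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // Illustrative "courier" proxies: each submitted task calls one worker on
    // behalf of the administrator and holds the reply until it is claimed.
    class CourierSketch {

        // Simplified worker interface for this sketch only.
        interface Worker { String doWork(String subRequest); }

        private final ExecutorService couriers = Executors.newCachedThreadPool();

        public String handle(Worker w1, Worker w2, String request) throws Exception {
            // Each submission plays the role of a courier proxy.
            Future<String> r1 = couriers.submit(() -> w1.doWork(request + "/part1"));
            Future<String> r2 = couriers.submit(() -> w2.doWork(request + "/part2"));

            // The futures (couriers) could instead be passed to another object
            // responsible for composing the final result.
            return composeResult(r1.get(), r2.get());
        }

        private String composeResult(String a, String b) {
            return a + " + " + b;
        }
    }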
Internal Concurrency

In this case a construct is provided for the creation of concurrent or quasi-concurrent threads, and a worker can be called by each of these threads in an RPC fashion. With quasi-concurrent threads, a call to a worker should trigger the execution of another thread. Such constructs are provided in the original design of Hybrid [24] and in SR [5]. In SR, the code segment of the administrator that is used for issuing requests to workers in parallel would look like this:

    co result1 := w1.doWork(...) -> loadManager.terminated(w1)
    // result2 := w2.doWork(...) -> loadManager.terminated(w2)
    oc
    globalResult := computeResult(result1, result2);

With such an approach it is not necessary to artificially decompose the administrator into multiple objects and proxies to obtain parallelism. It should be noted that supporting multiple threads in this way is a different issue from using threads for servicing multiple client requests. For instance, with the language SINA [33] it is possible to use several concurrent threads within an object for processing requests; there is no direct means, however, for one of these threads to create more threads for calling the worker objects in parallel. This is done indirectly by creating a courier object, as described above. It is therefore not necessarily redundant to support both multiple threads and non-blocking communication primitives.

4.3 Administrator Reusability

We have concentrated thus far on the reuse of objects without modification. For this it is clear that objects should support standard request/reply interfaces or other standard protocols. Problems of interference between concurrency and inheritance have been previously pointed out by other researchers [17], [32], so we will only summarize them and indicate some current trends.

Interference between existing code to be reused (e.g., superclasses) and incremental modifications (e.g., subclass extensions) is due to (i) the difficulty for the additional code to synchronize with the existing code and (ii) the difficulty for the existing synchronization code to remain open to modifications yet to be defined. Kafura and Lee therefore propose an approach to synchronization based on explicit abstract states [17], though this approach does not adequately support request scheduling, since activation conditions may not depend on message contents. The main point, however, is that we lack good abstractions for incremental modifications to object classes. The message-passing interface should not be viewed in the same way as the inheritance interface seen by subclasses [31]. Current work in this area attempts to "unbundle" the mechanisms for software composition, either in terms of software templates or "habitats" [30], or by decomposing inheritance into more primitive mechanisms [7], [14].
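To make the interference problem concrete, consider the classic bounded-buffer illustration (our example, not one from this paper): adding a single method in a subclass introduces a new activation condition that the superclass's synchronization code never anticipated. The Java sketch below only hints at the issue; in languages where synchronization constraints are centralized (for example in an explicit set of acceptable states), the superclass code itself would have to be rewritten.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Hypothetical bounded buffer; the synchronization conditions are written
    // directly into the methods.
    class Buffer {
        protected final Deque<Object> items = new ArrayDeque<>();
        protected final int capacity = 10;

        public synchronized void put(Object o) throws InterruptedException {
            while (items.size() == capacity) wait();  // activation condition: not full
            items.addLast(o);
            notifyAll();
        }

        public synchronized Object get() throws InterruptedException {
            while (items.isEmpty()) wait();           // activation condition: not empty
            Object o = items.removeFirst();
            notifyAll();
            return o;
        }
    }

    // The extension must restate a new activation condition ("at least two
    // items") and must understand the superclass's wait/notify protocol; with
    // less open synchronization schemes the superclass would need modification.
    class Buffer2 extends Buffer {
        public synchronized Object[] getTwo() throws InterruptedException {
            while (items.size() < 2) wait();
            Object[] pair = { items.removeFirst(), items.removeFirst() };
            notifyAll();
            return pair;
        }
    }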


Similar resources

Concurrency in Object-Oriented Programming Languages

An essential motivation behind concurrent object-oriented programming is to exploit the software reuse potential of object-oriented features in the development of concurrent systems. Early attempts to introduce concurrency to object-oriented languages uncovered interferences between object-oriented and concurrency features that limited the extent to which the benefits of object-oriented program...


ATOM: An Active Object Model for Enhancing Reuse in the Development of Concurrent Software

Substantial research activity in the past few years concentrated on the design of languages and models for integrating concurrency and object-oriented features with the intention to enhance the potential for software reuse in the development of concurrent systems. Most of the work in the area has focused on the problem of specifying and reusing through inheritance synchronization constraints on...


Using Viewpoints, Frameworks, and Domain-Specific Languages to Enhance Software Reuse

Case studies have shown that high levels of software reuse can be achieved through the use of object-oriented frameworks. This paper describes a viewpoint-based design and instantiation method for framework development. This method uses the concept of viewpoints and the hot-spot relation in object-oriented design to guide the designer on the identification of hot-spots in the structure of the f...


Concurrency Issues in Object-Oriented Programming

The integration of concurrent and object-oriented programming, although promising, presents problems that have not yet been fully explored. In this paper we attempt to identify issues in the design of concurrent object-oriented languages that must be addressed to achieve a satisfactory integration of concurrency in the object-oriented framework. We consider the approaches followed by object-ori...


Parallel Object-Oriented Specification Language

The Parallel Object-Oriented Specification Language (POOSL) is an expressive modelling language for hardware/software systems [10]. It was originally defined in [7] as an object-oriented extension of process algebra CCS [6], supporting (conditional) synchronous message passing between (hierarchically structured) asynchronous concurrent processes. Meanwhile, POOSL has been extended with real-tim...


Towards an Object Calculus

The development of concurrent object-based programming languages has suffered from the lack of any generally accepted formal foundations for defining their semantics. Furthermore, the delicate relationship between object-oriented features supporting reuse and operational features concerning interaction and state change is poorly understood in a concurrent setting. To address this problem, we pr...



Journal title:

Volume:   Issue:

Pages: –

Publication date: 1990